How To Overcome Confirmation Bias in Semi-Supervised Image Classification By Active Learning
Do we need active learning? The rise of strong deep semi-supervised methods
raises doubt about the usability of active learning in limited labeled data
settings. This is caused by results showing that combining semi-supervised
learning (SSL) methods with a random selection for labeling can outperform
existing active learning (AL) techniques. However, these results are obtained
from experiments on well-established benchmark datasets, which can overstate
their external validity. Moreover, the literature lacks sufficient research on the
performance of active semi-supervised learning methods in realistic data
scenarios, leaving a notable gap in our understanding. Therefore, we present
three data challenges common in real-world applications: between-class
imbalance, within-class imbalance, and between-class similarity. These
challenges can hurt SSL performance due to confirmation bias. We conduct
experiments with SSL and AL on simulated data challenges and find that random
sampling does not mitigate confirmation bias and, in some cases, leads to worse
performance than supervised learning. In contrast, we demonstrate that AL can
overcome confirmation bias in SSL in these realistic settings. Our results
provide insights into the potential of combining active and semi-supervised
learning in the presence of common real-world challenges, which is a promising
direction for robust methods when learning with limited labeled data in
real-world applications.

Comment: Accepted @ ECML PKDD 2023. This is the author's version of the work.
The definitive Version of Record will be published in the Proceedings of ECML
PKDD 2023.
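
The abstract contrasts random selection with active selection of samples for labeling alongside SSL-style pseudo-labeling. The following is a minimal illustrative sketch, not the authors' implementation: the entropy acquisition function, the 0.95 confidence threshold, and all function names are assumptions introduced here to show where the two selection strategies differ and where confirmation bias can enter via confident but wrong pseudo-labels.

```python
import numpy as np

# Illustrative sketch only (assumed names and values, not the paper's code).

def entropy(probs):
    """Predictive entropy per sample; higher means more uncertain."""
    eps = 1e-12
    return -np.sum(probs * np.log(probs + eps), axis=1)

def select_uncertain(probs, budget):
    """Active learning step: pick the `budget` most uncertain samples."""
    return np.argsort(-entropy(probs))[:budget]

def select_random(n_unlabeled, budget, rng):
    """Baseline: pick `budget` unlabeled samples uniformly at random."""
    return rng.choice(n_unlabeled, size=budget, replace=False)

def confident_pseudo_labels(probs, threshold=0.95):
    """Pseudo-labeling step typical of SSL: keep only confident predictions.
    Under class imbalance or between-class similarity these confident labels
    can be systematically wrong, which is one source of confirmation bias."""
    confidence = probs.max(axis=1)
    keep = confidence >= threshold
    return np.where(keep)[0], probs.argmax(axis=1)[keep]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fake softmax outputs for 10 unlabeled samples and 3 classes.
    logits = rng.normal(size=(10, 3))
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

    print("AL (uncertainty):", select_uncertain(probs, budget=3))
    print("Random baseline: ", select_random(len(probs), budget=3, rng=rng))
    print("Pseudo-labeled:  ", confident_pseudo_labels(probs))
```

In this toy setting, uncertainty-based selection tends to pick exactly the samples that the confidence threshold would exclude from pseudo-labeling, which is the intuition behind using AL to correct pseudo-label errors rather than relying on random selection.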